Domain adaptation aims to transfer the knowledge acquired by models trained on (data-rich) source domains to (low-resource) target domains, for which a popular method is invariant representation learning. While invariant representation learning has been studied extensively for classification and regression problems, how it applies to ranking problems, where both the data and the metrics have a list structure, is not well understood. Theoretically, we establish a domain adaptation generalization bound for ranking under listwise metrics such as MRR and NDCG. The bound suggests an adaptation method via learning list-level domain-invariant feature representations, whose benefits are empirically demonstrated by unsupervised domain adaptation experiments on real-world ranking tasks, including passage reranking. A key message is that, for domain adaptation, the representations should be analyzed at the same level at which the metric is computed: we show that learning invariant representations at the list level is most effective for adaptation on ranking problems.
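To make the list-level idea concrete, below is a minimal sketch (in PyTorch, not the paper's code) assuming a mean-pooled list representation, a ListNet-style softmax loss on labeled source lists, and a simple linear-kernel MMD penalty as the invariance term; all names, shapes, and the choice of penalty are illustrative.

```python
# Illustrative sketch: list-level invariant representations for ranking adaptation.
import torch
import torch.nn as nn

class ListEncoder(nn.Module):
    def __init__(self, item_dim: int, hidden: int = 128):
        super().__init__()
        self.item_net = nn.Sequential(nn.Linear(item_dim, hidden), nn.ReLU())
        self.scorer = nn.Linear(hidden, 1)

    def forward(self, lists: torch.Tensor):
        # lists: (batch, list_len, item_dim)
        h = self.item_net(lists)              # per-item features
        scores = self.scorer(h).squeeze(-1)   # ranking scores, (batch, list_len)
        list_repr = h.mean(dim=1)             # one representation per list
        return scores, list_repr

def mmd_penalty(src: torch.Tensor, tgt: torch.Tensor) -> torch.Tensor:
    # Linear-kernel MMD between source and target list-level representations.
    return (src.mean(dim=0) - tgt.mean(dim=0)).pow(2).sum()

model = ListEncoder(item_dim=32)
src_lists, tgt_lists = torch.randn(8, 10, 32), torch.randn(8, 10, 32)
src_labels = torch.rand(8, 10)                         # graded relevance (illustrative)

src_scores, src_repr = model(src_lists)
_, tgt_repr = model(tgt_lists)
rank_loss = -(src_labels.softmax(-1) * src_scores.log_softmax(-1)).sum(-1).mean()
loss = rank_loss + mmd_penalty(src_repr, tgt_repr)     # invariance at the list level
loss.backward()
```

The only point of the sketch is that the invariance penalty is computed on one vector per list, matching the level at which listwise metrics such as NDCG are defined.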
Conventional closed-world information extraction (IE) approaches rely on human ontologies to define the scope for extraction. As a result, such approaches fall short when applied to new domains. This calls for systems that can automatically infer new types from given corpora, a task which we refer to as type discovery. To tackle this problem, we introduce the idea of type abstraction, where the model is prompted to generalize and name the type. Then we use the similarity between inferred names to induce clusters. Observing that this abstraction-based representation is often complementary to the entity/trigger token representation, we set up these two representations as two views and design our model as a co-training framework. Our experiments on multiple relation extraction and event extraction datasets consistently show the advantage of our type abstraction approach. Code available at https://github.com/raspberryice/type-discovery-abs.
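As a rough illustration of the name-similarity step only (the token-representation view and the co-training loop are omitted), one could embed the type names the prompted model produces and cluster them; the example names, the TF-IDF representation, and the cluster count below are placeholder assumptions rather than the authors' setup, and a stronger text embedding would normally replace TF-IDF.

```python
# Illustrative sketch: cluster instances by the similarity of the type names
# a prompted model generated for them (the abstraction-based view only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import AgglomerativeClustering

# Hypothetical type names produced by prompting the model on each instance.
predicted_names = [
    "employer", "place of birth", "organization founded by",
    "employer or workplace", "birth location", "founder of company",
]

vectors = TfidfVectorizer().fit_transform(predicted_names).toarray()
clusters = AgglomerativeClustering(n_clusters=3).fit_predict(vectors)
for name, label in zip(predicted_names, clusters):
    print(label, name)
```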
Large language models (LLMs) show excellent performance but are compute- and memory-intensive. Quantization can reduce memory and accelerate inference. However, for LLMs beyond 100 billion parameters, existing methods cannot maintain accuracy or do not run efficiently on hardware. We propose SmoothQuant, a training-free, accuracy-preserving, and general-purpose post-training quantization (PTQ) solution to enable 8-bit weight, 8-bit activation (W8A8) quantization for LLMs that can be implemented efficiently. We observe that systematic outliers appear at fixed activation channels. Based on the fact that weights are easy to quantize while activations are not, SmoothQuant smooths the activation outliers by offline migrating the quantization difficulty from activations to weights with a mathematically equivalent transformation. SmoothQuant enables an INT8 quantization of both weights and activations for all the GEMMs in LLMs, including OPT-175B, BLOOM-176B, and GLM-130B. SmoothQuant has better hardware efficiency than existing techniques using mixed-precision activation quantization or weight-only quantization. We demonstrate up to 1.56x speedup and 2x memory reduction for LLMs with negligible loss in accuracy. Thanks to the hardware-friendly design, we integrate SmoothQuant into FasterTransformer, a state-of-the-art LLM serving framework, and achieve faster inference speed with half the number of GPUs compared to FP16. Our work offers a turn-key solution that reduces hardware costs and democratizes LLMs. Code is available at: https://github.com/mit-han-lab/smoothquant.
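The migration itself can be sketched in a few lines: for input channel $j$, a smoothing factor $s_j = \max|X_j|^{\alpha} / \max|W_j|^{1-\alpha}$ divides the activations and scales the corresponding weight rows, so $(X\,\mathrm{diag}(s)^{-1})(\mathrm{diag}(s)\,W) = XW$ while activation outliers are flattened. Below is a minimal NumPy illustration of that idea, not the released implementation; the migration strength $\alpha=0.5$ and the synthetic tensors are assumptions.

```python
# Minimal illustration of SmoothQuant-style offline smoothing (not the official code).
import numpy as np

def smooth(X, W, alpha=0.5):
    """Migrate quantization difficulty from activations X (tokens x channels)
    to weights W (channels x out_features) with an equivalent transformation."""
    act_max = np.abs(X).max(axis=0)              # per-channel activation max
    w_max = np.abs(W).max(axis=1)                # per-channel weight max
    s = act_max ** alpha / w_max ** (1 - alpha)  # smoothing factor per channel
    return X / s, W * s[:, None]                 # X diag(s)^-1 and diag(s) W

X = np.random.randn(16, 64) * np.array([10.0 if c % 8 == 0 else 1.0 for c in range(64)])
W = np.random.randn(64, 32)
X_s, W_s = smooth(X, W)

# The product is numerically unchanged, but activation outliers are flattened.
assert np.allclose(X @ W, X_s @ W_s, atol=1e-6)
print(np.abs(X).max(), "->", np.abs(X_s).max())
```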
Humans can classify an unseen category by reasoning on its language explanations. This ability is owing to the compositional nature of language: we can combine previously seen concepts to describe the new category. For example, we might describe ravens as "a kind of large bird with black feathers", so that others can use their knowledge of the concepts "large bird" and "black feathers" to recognize a raven. Inspired by this observation, in this work we tackle the zero-shot classification task by logically parsing and reasoning on natural language explanations. To this end, we propose the framework CLORE (Classification by LOgical Reasoning on Explanations). While previous methods usually regard textual information as implicit features, CLORE parses explanations into logical structures and then reasons along these structures on the input to produce a classification score. Experimental results on explanation-based zero-shot classification benchmarks demonstrate that CLORE is superior to baselines, mainly because it performs better on tasks requiring more logical reasoning. Alongside classification decisions, CLORE can provide the logical parsing and reasoning process as a form of rationale. Through empirical analysis we demonstrate that CLORE is also less affected by linguistic biases than baselines.
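To convey the flavor of reasoning along a parsed logical structure, here is a heavily simplified toy (not the CLORE model): an explanation is treated as a conjunction of concept phrases, each concept is scored against the input by a trivial word-overlap matcher standing in for the learned matching module, and the scores are combined with a soft AND.

```python
# Toy sketch: zero-shot classification by reasoning over a parsed explanation.
# A real system would use a learned matcher; bag-of-words overlap stands in here.
def concept_score(concept: str, text: str) -> float:
    concept_words = set(concept.lower().split())
    text_words = set(text.lower().split())
    return len(concept_words & text_words) / len(concept_words)

def classify(text: str, explanation_concepts: list[str]) -> float:
    # Soft logical AND over concepts: multiply per-concept match scores.
    score = 1.0
    for concept in explanation_concepts:
        score *= concept_score(concept, text)
    return score

# Hypothetical explanation for the category "raven", parsed into two concepts.
raven_concepts = ["large bird", "black feathers"]
print(classify("a large black bird with glossy black feathers", raven_concepts))
print(classify("a small yellow bird with a short beak", raven_concepts))
```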
During image editing, existing deep generative models tend to re-synthesize the entire output from scratch, including the unedited regions. This leads to a significant waste of computation, especially for minor editing operations. In this work, we present Spatially Sparse Inference (SSI), a general-purpose technique that selectively performs computation for edited regions and accelerates various generative models, including both conditional GANs and diffusion models. Our key observation is that users tend to make gradual changes to the input image. This motivates us to cache and reuse the feature maps of the original image. Given an edited image, we sparsely apply the convolutional filters to the edited regions while reusing the cached features for the unedited regions. Based on our algorithm, we further propose Sparse Incremental Generative Engine (SIGE) to convert the computation reduction to latency reduction on off-the-shelf hardware. With 1.2%-area edited regions, our method reduces the computation of DDIM by 7.5$\times$ and GauGAN by 18$\times$ while preserving the visual fidelity. With SIGE, we accelerate DDIM by 3.0$\times$ on an RTX 3090 and 6.6$\times$ on an Apple M1 Pro CPU, and GauGAN by 4.2$\times$ on an RTX 3090 and 14$\times$ on an Apple M1 Pro CPU.
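The caching-and-sparse-recompute idea can be sketched as follows (illustrative only, not the SIGE engine): compare the edited input with the original, mark the tiles that changed, and recompute a convolution only on those tiles while copying cached outputs elsewhere. To keep the toy exact, a 1x1 convolution is used; real 3x3 convolutions need overlapping halo regions around each tile, a detail this sketch ignores. The tile size and tensors are assumptions.

```python
# Illustrative tile-level sparse recomputation (not the SIGE implementation).
import torch
import torch.nn.functional as F

TILE = 8

def sparse_conv1x1(edited, original, cached_out, weight):
    out = cached_out.clone()
    diff = (edited - original).abs().sum(dim=1, keepdim=True)    # per-pixel change
    _, _, H, W = edited.shape
    for y in range(0, H, TILE):
        for x in range(0, W, TILE):
            if diff[:, :, y:y+TILE, x:x+TILE].max() > 1e-6:      # tile was edited
                tile = edited[:, :, y:y+TILE, x:x+TILE]
                out[:, :, y:y+TILE, x:x+TILE] = F.conv2d(tile, weight)
    return out

original = torch.randn(1, 3, 64, 64)
edited = original.clone()
edited[:, :, 40:48, 16:24] += 1.0                  # a small local edit
weight = torch.randn(16, 3, 1, 1)
cached_out = F.conv2d(original, weight)            # features cached from the first pass

out = sparse_conv1x1(edited, original, cached_out, weight)
assert torch.allclose(out, F.conv2d(edited, weight), atol=1e-5)
```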
Understanding the decision process of neural networks is hard. One vital method for explanation is to attribute the decision to pivotal features. Although many algorithms have been proposed, most of them solely improve faithfulness to the model. However, the real environment contains much random noise, which may cause fluctuations in the explanations. More seriously, recent works show that explanation algorithms are vulnerable to adversarial attacks. All of this makes explanations hard to trust in real scenarios. To bridge this gap, we propose a model-agnostic method, \emph{Median Test for Feature Attribution} (MeTFA), to quantify the uncertainty and increase the stability of explanation algorithms with theoretical guarantees. MeTFA provides two functions: (1) examining whether a feature is significantly important or unimportant and generating a MeTFA-significant map to visualize the result; (2) computing the confidence interval of a feature attribution score and generating a MeTFA-smoothed map to increase the stability of the explanation. Experiments show that MeTFA improves the visual quality of explanations and significantly reduces the instability while maintaining faithfulness. To quantitatively evaluate the faithfulness of an explanation under different noise settings, we further propose several robust faithfulness metrics. Experimental results show that MeTFA-smoothed explanations can significantly increase robust faithfulness. In addition, we use two scenarios to show MeTFA's potential in applications. First, when applied to a SOTA explanation method to locate context bias in semantic segmentation models, MeTFA-significant explanations use much smaller regions to maintain 99\%+ faithfulness. Second, when tested with different explanation-oriented attacks, MeTFA can help defend against vanilla as well as adaptive adversarial attacks on explanations.
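A simplified sketch of the smoothing part (not the full MeTFA procedure, which adds hypothesis tests and theoretical guarantees): run an attribution method several times under input noise, take the per-feature median as the smoothed map, and read an interval off the sorted samples as a rough order-statistic confidence band. The gradient-times-input attribution, the Gaussian noise level, and the sample count are placeholder assumptions.

```python
# Illustrative sketch: median-smoothed attributions with an order-statistic interval.
import numpy as np

def gradient_x_input(x, w):
    # Toy attribution for a linear model f(x) = w.x : gradient * input.
    return w * x

rng = np.random.default_rng(0)
w = rng.normal(size=32)
x = rng.normal(size=32)

n_samples, sigma = 49, 0.1
samples = np.stack([gradient_x_input(x + sigma * rng.normal(size=32), w)
                    for _ in range(n_samples)])   # (n_samples, n_features)

smoothed = np.median(samples, axis=0)             # median-smoothed attribution map
sorted_s = np.sort(samples, axis=0)
lower, upper = sorted_s[9], sorted_s[39]          # simple order-statistic interval
print(smoothed[:5])
print(lower[:5], upper[:5])
```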
Load balancing (LB) is a challenging problem in hybrid light fidelity (LiFi) and wireless fidelity (WiFi) networks (HLWNets) due to the nature of heterogeneous access points (APs). Machine learning has the potential to provide a complexity-friendly LB solution with near-optimal network performance, at the cost of a training process. However, state-of-the-art (SOTA) learning-aided LB methods need retraining when the network environment (especially the number of users) changes, which significantly limits their practicability. In this paper, a novel deep neural network (DNN) structure named adaptive target-condition neural network (A-TCNN) is proposed, which conducts AP selection for one target user conditioned on the other users. In addition, an adaptive mechanism is developed to map a smaller number of users to a larger one by splitting the data rate requirements, without affecting the AP selection result for the target user. This enables the proposed method to handle different numbers of users without retraining. Results show that A-TCNN achieves a network throughput very close to the optimum on the testing dataset, with a gap of less than 3%. It is also shown that A-TCNN obtains a network throughput comparable to two SOTA benchmarks while reducing the runtime by up to three orders of magnitude.
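As a rough sketch of what a target-condition architecture could look like (an illustration, not the A-TCNN design in the paper): the target user's features are concatenated with an order-invariant aggregation of the other users' features, and an MLP outputs a distribution over APs. The feature dimension, aggregation by summation, and the number of APs are assumptions.

```python
# Illustrative target-conditioned AP selection network (not the paper's A-TCNN).
import torch
import torch.nn as nn

class TargetConditionNet(nn.Module):
    def __init__(self, user_dim: int, n_aps: int, hidden: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * user_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_aps),
        )

    def forward(self, target: torch.Tensor, others: torch.Tensor) -> torch.Tensor:
        # target: (batch, user_dim); others: (batch, n_others, user_dim)
        condition = others.sum(dim=1)                  # order-invariant condition
        logits = self.mlp(torch.cat([target, condition], dim=-1))
        return logits.softmax(dim=-1)                  # AP selection probabilities

net = TargetConditionNet(user_dim=8, n_aps=5)
probs = net(torch.randn(4, 8), torch.randn(4, 10, 8))  # 10 other users
print(probs.shape)  # torch.Size([4, 5])
```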
Video-text retrieval (VTR) is an attractive yet challenging task for multi-modal understanding, which aims to search for relevant videos (texts) given a query text (video). Existing methods typically employ completely heterogeneous visual-textual information to align videos and texts, while lacking awareness of the homogeneous high-level semantic information shared by the two modalities. To fill this gap, in this work we propose a novel visual-linguistic alignment model for VTR named HiSE, which improves cross-modal representations by incorporating explicit high-level semantics. First, we explore the hierarchical property of explicit high-level semantics and further decompose it into two levels, i.e., discrete semantics and holistic semantics. Specifically, for the visual branch, we exploit an off-the-shelf semantic entity predictor to generate discrete high-level semantics. Meanwhile, a trained video captioning model is employed to output holistic high-level semantics. For the textual modality, we parse the text into three parts: occurrence, action, and entity. In particular, the occurrence corresponds to the holistic high-level semantics, while the action and entity represent the discrete ones. Then, different graph reasoning techniques are utilized to promote the interaction between the holistic and discrete high-level semantics. Extensive experiments demonstrate that, with the aid of explicit high-level semantics, our method achieves superior performance over state-of-the-art methods on three benchmark datasets, including MSR-VTT, MSVD, and DiDeMo.
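A toy way to see the two-level scoring (a drastic simplification that ignores the graph reasoning modules and learned encoders): the holistic similarity compares a generated video caption with the occurrence part of the text, the discrete similarity matches parsed entities/actions against predicted entities, and the final score fuses the two. All phrases, embeddings, and fusion weights below are synthetic placeholders.

```python
# Toy illustration of scoring a video-text pair at two semantic levels.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

rng = np.random.default_rng(0)
embed = {w: rng.normal(size=16) for w in
         ["a dog catches a frisbee", "dog", "frisbee", "catch"]}

# Holistic level: generated video caption vs. the "occurrence" part of the text.
holistic_sim = cosine(embed["a dog catches a frisbee"], embed["a dog catches a frisbee"])

# Discrete level: predicted video entities vs. parsed text entities/actions.
video_concepts = [embed["dog"], embed["frisbee"]]
text_concepts = [embed["dog"], embed["catch"]]
discrete_sim = np.mean([max(cosine(t, v) for v in video_concepts) for t in text_concepts])

score = 0.5 * holistic_sim + 0.5 * discrete_sim   # weighted fusion of the two levels
print(round(score, 3))
```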
Sample assignment plays a prominent role in modern object detection approaches. However, most existing methods rely on manual designs to assign positive/negative samples, which do not explicitly establish the relationship between sample assignment and object detection performance. In this work, we propose a novel dynamic sample assignment scheme based on hyper-parameter search. We first define the number of positive samples assigned to each ground truth as a hyper-parameter and employ a surrogate optimization algorithm to derive the optimal choice. Then, we design a dynamic sample assignment procedure to dynamically select the optimal number of positives at each training iteration. Experiments demonstrate that the resulting HPS-Det brings improved performance over different object detection baselines. Moreover, we analyze the hyper-parameter reusability when transferring between different datasets and between different backbones for object detection, which exhibits the superiority and versatility of our method.
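A toy sketch of treating the number of positives per ground truth as a tunable quantity (not the HPS-Det surrogate optimizer or its dynamic selection rule): for each ground-truth box, anchors are ranked by IoU and the top-$k$ are assigned positive, where $k$ is the hyper-parameter being searched. The anchors and boxes are synthetic.

```python
# Illustrative top-k positive sample assignment, with k as a hyper-parameter.
import numpy as np

def iou(boxes, gt):
    x1 = np.maximum(boxes[:, 0], gt[0]); y1 = np.maximum(boxes[:, 1], gt[1])
    x2 = np.minimum(boxes[:, 2], gt[2]); y2 = np.minimum(boxes[:, 3], gt[3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    area_g = (gt[2] - gt[0]) * (gt[3] - gt[1])
    return inter / (area_b + area_g - inter)

def assign_positives(anchors, gts, k):
    """Return, for each ground truth, the indices of its k positive anchors."""
    return {i: np.argsort(-iou(anchors, gt))[:k] for i, gt in enumerate(gts)}

anchors = np.random.rand(100, 2) * 80
anchors = np.concatenate([anchors, anchors + 20], axis=1)   # 100 boxes of size 20x20
gts = np.array([[10, 10, 40, 40], [50, 50, 90, 90]], dtype=float)
for k in (3, 9):                                            # candidate hyper-parameter values
    print(k, assign_positives(anchors, gts, k))
```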
On-device training enables a model to adapt to new data collected from sensors by fine-tuning a pre-trained model. However, the training memory consumption is prohibitive for IoT devices that have tiny memory resources. We propose an algorithm-system co-design framework to make on-device training possible with only 256KB of memory. On-device training faces two unique challenges: (1) the quantized graphs of neural networks are hard to optimize due to mixed bit-precision and the lack of normalization; (2) the limited hardware resources (memory and computation) do not allow full backward computation. To cope with the optimization difficulty, we propose Quantization-Aware Scaling to calibrate the gradient scales and stabilize quantized training. To reduce the memory footprint, we propose Sparse Update to skip the gradient computation of less important layers and sub-tensors. The algorithm innovations are implemented by a lightweight training system, Tiny Training Engine, which prunes the backward computation graph to support sparse updates and offloads runtime auto-differentiation to compile time. Our framework is the first practical solution for on-device transfer learning of visual recognition on tiny IoT devices (e.g., a microcontroller with only 256KB SRAM), using less than 1/100 of the memory of existing frameworks while matching the accuracy of cloud training + edge deployment for the tinyML application VWW. Our study enables IoT devices not only to perform inference but also to continuously adapt to new data for lifelong on-device learning.
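A minimal PyTorch illustration of the sparse-update idea (not the Tiny Training Engine, which works on quantized graphs and prunes the backward graph at compile time): gradients are enabled only for biases and a couple of late layers, roughly the memory-saving update pattern the abstract describes. The model and the choice of which layers to keep trainable are arbitrary assumptions.

```python
# Illustrative sparse update: only update biases plus the last conv and classifier.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 10),
)

trainable = []
for name, param in model.named_parameters():
    # Keep biases and the parameters of the last conv (index 4) and classifier (index 8).
    keep = name.endswith("bias") or name.startswith(("4.", "8."))
    param.requires_grad_(keep)
    if keep:
        trainable.append(param)

optimizer = torch.optim.SGD(trainable, lr=0.01)
x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()       # only the selected parameters receive gradients
optimizer.step()
```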